Building a Multimodal Laughter Database for Emotion Recognition
Authors
Abstract
Laughter is a significant paralinguistic cue that is largely ignored in multimodal affect analysis. In this work, we investigate how a multimodal laughter corpus can be constructed and annotated with both discrete and dimensional emotion labels for acted and spontaneous laughter. Professional actors enacted emotions to produce the acted clips, while spontaneous laughter was collected from volunteers. Experts annotated the acted laughter clips, while volunteers with an acceptable empathy quotient score annotated the spontaneous laughter clips. The data were pre-processed to remove environmental noise and then manually segmented from the onset of each expression to its offset. Our findings indicate that laughter carries distinct emotions, and that emotion in laughter is recognized better from audio information than from facial information. This may be explained by emotion regulation, i.e., laughter is used to suppress or regulate certain emotions. Furthermore, contextual information plays a crucial role in understanding the kind of laughter and the emotion in the enactment.
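As a concrete illustration of the pre-processing step in the abstract, the following Python sketch denoises a laughter recording and approximates the onset-to-offset segmentation by trimming silence. The file name, sample rate, energy threshold, and the choice of the librosa and noisereduce libraries are assumptions for illustration only; in the corpus itself, segmentation was done manually.

# A minimal sketch of the pre-processing described above, assuming WAV input
# and the librosa / noisereduce libraries. The corpus was segmented manually;
# the silence-trimming step here is only a rough, automated stand-in for that
# onset-to-offset segmentation.
import librosa
import noisereduce as nr

def preprocess_clip(path, sr=16000, top_db=30):
    # Load the recording as a mono signal at a fixed sample rate.
    y, sr = librosa.load(path, sr=sr, mono=True)
    # Attenuate stationary environmental noise estimated from the clip itself.
    y_clean = nr.reduce_noise(y=y, sr=sr)
    # Trim leading/trailing silence to approximate the onset and offset.
    y_seg, (onset, offset) = librosa.effects.trim(y_clean, top_db=top_db)
    return y_seg, onset / sr, offset / sr

if __name__ == "__main__":
    # "laughter_clip.wav" is a hypothetical file name, not part of the corpus.
    segment, t_on, t_off = preprocess_clip("laughter_clip.wav")
    print(f"Laughter segment from {t_on:.2f}s to {t_off:.2f}s")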
Similar resources
A Database for Automatic Persian Speech Emotion Recognition: Collection, Processing and Evaluation
Abstract Recent developments in robotics and automation have motivated researchers to improve the efficiency of interactive systems through natural man-machine interaction. Since speech is the most popular method of communication, recognizing human emotions from the speech signal has become a challenging research topic known as Speech Emotion Recognition (SER). In this study, we propose a Persian em...
Amplifying a Sense of Emotion toward Drama: Long Short-Term Memory Recurrent Neural Network for dynamic emotion recognition
This paper tries to amplify a sense of emotion toward drama, using a Long Short-Term Memory Recurrent Neural Network to model and predict dynamic emotion (arousal and valence). After building the model, we transplant the whole framework and visualize its results. We have two demo versions: an RGB version and a Vignette version. The RGB version modulates the RGB values of frames in video...
The eNTERFACE'05 Audio-Visual Emotion Database
This paper presents an audio-visual emotion database that can be used as a reference database for testing and evaluating video, audio or joint audio-visual emotion recognition algorithms. Additional uses may include the evaluation of algorithms performing other multimodal signal processing tasks, such as multimodal person identification or audio-visual speech recognition. This paper presents th...
Analysis and recoding of multimodal data
Emotions are part of our lives, and they can enhance the meaning of our communication. However, communication with computers is still done by keyboard and mouse. In this human-computer interaction there is no room for emotions, whereas if we communicated with machines the way we do in face-to-face communication, much information could be extracted from the context and emotion of the speaker. W...
MEC 2016: The Multimodal Emotion Recognition Challenge of CCPR 2016
Emotion recognition is a significant research field of pattern recognition and artificial intelligence. The Multimodal Emotion Recognition Challenge (MEC) is a part of the 2016 Chinese Conference on Pattern Recognition (CCPR). The goal of this competition is to compare multimedia processing and machine learning methods for multimodal emotion recognition. The challenge also aims to provide a c...
Publication date: 2012